Quasi-Lagrangian Neural Network for Convex Quadratic Optimization



Related articles

A Recurrent Neural Network for Solving Strictly Convex Quadratic Programming Problems

In this paper we present an improved neural network for solving strictly convex quadratic programming (QP) problems. The proposed model is derived from a piecewise equation corresponding to the optimality conditions of the convex QP problem and has lower structural complexity than other existing neural network models for solving such problems. On the theoretical side, stability and global converge...
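The abstract does not spell out the network dynamics, but a standard projection-type recurrent model for a box-constrained strictly convex QP gives the flavor of such methods. The sketch below is illustrative only; the box constraints, step sizes, and function names are assumptions, not the paper's formulation.

```python
import numpy as np

# Illustrative sketch: a projection-type recurrent network for the
# box-constrained strictly convex QP  min 0.5*x'Qx + c'x,  l <= x <= u.
# (Assumed model for illustration; the paper's piecewise-equation model may differ.)

def project_box(x, l, u):
    """Project onto the box [l, u] componentwise."""
    return np.minimum(np.maximum(x, l), u)

def projection_nn_qp(Q, c, l, u, alpha=0.1, dt=0.05, steps=5000):
    """Euler simulation of the dynamics dx/dt = P_[l,u](x - alpha*(Qx + c)) - x.
    Its equilibrium satisfies the optimality condition of the QP."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        x = x + dt * (project_box(x - alpha * (Q @ x + c), l, u) - x)
    return x

# Small example: Q positive definite, so the QP is strictly convex.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(projection_nn_qp(Q, c, l=np.zeros(2), u=np.ones(2)))
```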


A Semidefinite Optimization Approach to Quadratic Fractional Optimization with a Strictly Convex Quadratic Constraint

In this paper we consider a fractional optimization problem that minimizes the ratio of two quadratic functions subject to a strictly convex quadratic constraint. First, using an extension of the Charnes-Cooper transformation, an equivalent homogenized quadratic reformulation of the problem is given. Then we show that under certain assumptions, it can be solved to global optimality using semidefini...
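For readers unfamiliar with the transformation, the following is a minimal sketch of how a generalized Charnes-Cooper substitution homogenizes a quadratic fractional problem; the matrices A, B, C and vectors a, b, c are assumed notation, not necessarily the paper's.

```latex
% Assumed problem data: minimize (x^T A x + a^T x + a_0)/(x^T B x + b^T x + b_0)
% subject to x^T C x + c^T x + c_0 <= 0, with a positive denominator.
% The substitution x = y/t, t > 0, with the denominator normalized to 1,
% yields a homogenized quadratic reformulation:
\begin{align*}
\min_{y,\,t}\quad & y^{\top} A\, y + t\, a^{\top} y + a_0 t^{2} \\
\text{s.t.}\quad  & y^{\top} B\, y + t\, b^{\top} y + b_0 t^{2} = 1, \\
                  & y^{\top} C\, y + t\, c^{\top} y + c_0 t^{2} \le 0, \qquad t > 0 .
\end{align*}
```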


Convex Optimization and Lagrangian Duality

Finally, the Lagrange dual function is given by $g(\vec{\lambda}, \vec{\nu}) = \inf_{\vec{x}} L(\vec{x}, \vec{\lambda}, \vec{\nu})$. We now make a couple of simple observations. Observation. When $L(\cdot, \vec{\lambda}, \vec{\nu})$ is unbounded from below, the dual takes the value $-\infty$. Observation. $g(\vec{\lambda}, \vec{\nu})$ is concave, as it is the infimum of a set of affine functions. If $x$ is a feasible solution of program (10.2)–(10.4), then we have the following: $L(x, \vec{\lambda}, \vec{\nu}) = f_0(x)\ldots$
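The truncated argument is building toward weak duality; a standard completion of this step, with assumed notation $f_i$ for the inequality constraints and $h_j$ for the equality constraints, is:

```latex
% For any feasible x and any lambda >= 0,
\begin{align*}
L(x, \vec{\lambda}, \vec{\nu})
  &= f_0(x) + \sum_i \lambda_i f_i(x) + \sum_j \nu_j h_j(x) \\
  &\le f_0(x),
\end{align*}
% since f_i(x) <= 0 and h_j(x) = 0 at any feasible point.  Taking the infimum
% over x on the left and the minimum over feasible x on the right gives
% g(\vec{\lambda}, \vec{\nu}) <= p^*: every dual value lower-bounds the primal optimum.
```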


Convolutional Neural Network and Convex Optimization

This report shows that the performance of deep convolutional neural networks can be improved by incorporating convex optimization techniques. First, we find that the sub-models learned by dropout can be combined more effectively by solving a convex problem. We also generalize this idea to models that are not trained with dropout. Compared to traditional methods, we get an improvement of 0.22% and...
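The report's exact convex formulation is not given in this abstract; one common way to combine sub-model predictions with a convex problem is to fit simplex-constrained weights on held-out data. The sketch below is an assumed illustration (the names combine_submodels and project_simplex and the squared-error objective are not from the report).

```python
import numpy as np

# Illustrative sketch: combine validation predictions of several sub-models
# with convex weights w >= 0, sum(w) = 1, minimizing squared error on targets.

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def combine_submodels(P, y, steps=2000):
    """P: (n_samples, n_models) predictions; y: (n_samples,) targets.
    Projected gradient descent on the convex problem min_w ||P w - y||^2."""
    w = np.full(P.shape[1], 1.0 / P.shape[1])
    lr = 1.0 / (2.0 * np.linalg.norm(P, 2) ** 2)  # safe step for the quadratic
    for _ in range(steps):
        grad = 2.0 * P.T @ (P @ w - y)
        w = project_simplex(w - lr * grad)
    return w

# Toy example with three "sub-models".
rng = np.random.default_rng(0)
y = rng.normal(size=50)
P = np.stack([y + 0.1 * rng.normal(size=50) for _ in range(3)], axis=1)
print(combine_submodels(P, y))
```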


Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems

We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the proximal interpretat...
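A bare-bones augmented Lagrangian iteration for a convex QP with linear inequality constraints is sketched below, using plain gradient descent for the inner minimization; the parameter values and helper names are assumptions, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch: augmented Lagrangian method for
#   min 0.5*x'Qx + c'x   s.t.  Ax <= b,   with Q positive semidefinite.

def aug_lagrangian_qp(Q, c, A, b, r=10.0, outer=50, inner=500, lr=1e-2):
    x, lam = np.zeros(Q.shape[0]), np.zeros(A.shape[0])
    for _ in range(outer):
        for _ in range(inner):
            # Gradient of the augmented Lagrangian in x.
            mult = np.maximum(0.0, lam + r * (A @ x - b))
            grad = Q @ x + c + A.T @ mult
            x = x - lr * grad
        # Multiplier update; driven by the constraint value max(0, Ax - b),
        # which the paper shows converges globally linearly to zero for large r.
        lam = np.maximum(0.0, lam + r * (A @ x - b))
    return x, lam

# Toy problem: minimize 0.5*||x||^2 - x1 subject to x1 + x2 <= 0.5.
Q = np.eye(2)
c = np.array([-1.0, 0.0])
A = np.array([[1.0, 1.0]])
b = np.array([0.5])
x, lam = aug_lagrangian_qp(Q, c, A, b)
print(x, lam)  # expected solution near x = (0.75, -0.25), lam = 0.25
```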



Journal

Journal title: IEEE Transactions on Neural Networks

Year: 2008

ISSN: 1045-9227,1941-0093

DOI: 10.1109/tnn.2008.2001183